Add Testing Matrix #748
Conversation
|
I noticed that |
|
I also added |
|
I corrected some of the errors from
|
|
The tests are failing now because of the following error at build time: It may have something to do with this issue, but it is unclear. Going to try using the latest |
|
So at least the tests are consistent now, failing with |
|
Thanks for getting this all set up and working through some version issues! So if I understand correctly, the tests all pass with "opt" compiler flags but not with "debug" compiler flags. Do you know why we don't get any output from the failed test? And why it doesn't run the remaining tests? |
|
We need to upload the errors that seem to be appearing for the debug flags. Clearly we should upload the failure text, although it is odd that the interface does not report more. I am trying to reproduce this locally and will report back. The other failure has to do with coveralls; my best guess is that we are trying to upload to coveralls more than once and it does not like that. We can easily pick one matrix entry to upload from, maybe the |
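A hypothetical guard for the single-upload idea, assuming the matrix exports variables like BUILD_TYPE and FC (names invented here for illustration, not defined by this PR): only one chosen configuration would run the coverage upload.

```python
import os
import subprocess

# Report coverage from a single matrix entry to avoid duplicate uploads
# that coveralls rejects; the variable names are placeholders.
if os.environ.get("BUILD_TYPE") == "opt" and os.environ.get("FC") == "gfortran":
    subprocess.run(["coveralls"], check=True)
```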
|
I think I unbroke things, but the debug cases are still failing, and because pytest is not completing we are not getting any of the output from what I added (although I am not confident that is working in any case). |
|
@ketch Not catching the output seems to have given some more indication of what's going wrong. I looked into it briefly, but did not immediately see the cause, or whether the failure is even real and something else is actually responsible. At least this may be a place to start. |
|
Wow, this has actually uncovered a significant bug! One that has been in my code probably since I wrote it back during my PhD. Not sure how this didn't cause problems before. It's a rather annoying issue: in SharpClaw, there are different options for limiting -- you can limit based on waves, or based on conserved quantities. But that means that the size of the mthlim array should sometimes be num_waves and sometimes num_eqn. I guess the right thing would be to make this array get resized based on which kind of limiting is selected, but that seems awkward and the truth is we almost never use the flexibility of applying different limiters to different fields. So I'm tempted to just say that if you limit conserved quantities, then you can only choose one limiter (i.e. mthlim(1)) and it will get applied to all of them. What do you think @mandli? |
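For concreteness, a minimal Python sketch of the sizing rule described above, using illustrative names (limit_on_waves, num_waves, num_eqn, mthlim) rather than the actual PyClaw attributes: the required limiter-array length depends on what is being limited, and a single limiter can be broadcast to all fields as proposed.

```python
import numpy as np

def expected_mthlim_length(limit_on_waves, num_waves, num_eqn):
    """One limiter per wave when limiting waves, one per equation when
    limiting conserved quantities."""
    return num_waves if limit_on_waves else num_eqn

def normalize_mthlim(mthlim, limit_on_waves, num_waves, num_eqn):
    """Broadcast a single limiter to every wave/field, or check that an
    explicit list has exactly the required length."""
    mthlim = np.atleast_1d(mthlim)
    needed = expected_mthlim_length(limit_on_waves, num_waves, num_eqn)
    if mthlim.size == 1:
        # The "mthlim(1) applies to all of them" fallback suggested above.
        return np.repeat(mthlim, needed)
    if mthlim.size != needed:
        raise ValueError(
            f"mthlim has {mthlim.size} entries but {needed} are required "
            "for the selected limiting option"
        )
    return mthlim
```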
|
I guess my thought is to make mthlim be the right size. I am trying to think, though: if you had multiple waves for one field, say, without the rest of the limiters being indexed by wave, I am not sure how you would distinguish that. Not that you would not be able to do something; we just do not support that currently. Curious what you see as the full functionality possible, though. |
What do you mean by "right size"? Just max(num_eqn, num_waves)? Or exactly the size that will be used based on the limiting method chosen? The first one would be easy to implement, and the only caveat is that some entries of mthlim would sometimes be ignored. I think that approach should be fine.
I don't understand what you're saying... |
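As a hypothetical sketch of the max(num_eqn, num_waves) option (example sizes only, not code from this PR): allocate the larger length once and only read the slice that the chosen limiting mode needs, so any trailing entries are simply ignored.

```python
import numpy as np

num_eqn, num_waves = 3, 2                      # example sizes only
mthlim = np.full(max(num_eqn, num_waves), 1)   # always long enough

# Only the leading slice is consulted; trailing entries are ignored.
limit_on_waves = True
active = mthlim[:num_waves] if limit_on_waves else mthlim[:num_eqn]
```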
Yes, exactly this. We could stick
I was thinking that you were implying that you could do limiting on some of the waves and some of the conserved quantities, mixing which you were limiting. If there were more waves than equations, though, this would seem to be a bit tricky unless the waves were associated with one of the equations; the opposite case, with more equations than waves, likewise. Mostly this would make sanity checking complex, or maybe we would just skip making sure a user was not breaking their own code. |
Okay, I'll implement that in a PR tomorrow.
Oh no, we don't do that and I don't want to go there. It's either all waves or all conserved quantities. |
Every time I start thinking about the complexities of doing so, I understand why it's best left alone. |
For sharpclaw, this is sometimes num_eqn and sometimes num_waves depending on options.
|
@ketch made some bug fixes and we identified a long-standing "bug", so it looks like we have one more thing to work out with the 3D acoustics test. |
This PR adds a testing matrix so we can test multiple platforms, compilers, and build types. This includes
Most of these are placeholders as we suss out which ones work, and we can expand from there.
This PR also adds archiving of test results, although it currently does not archive anything.
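As a rough sketch of the intent rather than the actual CI configuration in this PR (the compiler list, FC/FFLAGS convention, and file names below are placeholders): each matrix entry runs the test suite and leaves behind a report that the job can archive.

```python
import itertools
import os
import pathlib
import subprocess

# Placeholder matrix; the real matrix lives in the CI configuration.
COMPILERS = ["gfortran"]
BUILD_TYPES = ["opt", "debug"]     # the two flag sets discussed above

results_dir = pathlib.Path("test-results")
results_dir.mkdir(exist_ok=True)

for fc, build in itertools.product(COMPILERS, BUILD_TYPES):
    report = results_dir / f"pytest_{fc}_{build}.xml"
    # FC/FFLAGS are only illustrative of per-entry compiler settings;
    # --junitxml is a standard pytest option for machine-readable reports.
    env = dict(os.environ, FC=fc,
               FFLAGS="-O3" if build == "opt" else "-g -O0")
    subprocess.run(["pytest", f"--junitxml={report}"], env=env, check=False)
```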